74 research outputs found

    Biologically inspired composite image sensor for deep field target tracking

    Get PDF
The use of nonuniform image sensors in mobile computer vision applications can be an effective solution when computational burden is problematic. Nonuniform image sensors are still in their infancy: their unique qualities have not been fully investigated, nor have they been extensively applied in practice. In this dissertation, a system has been developed that can perform vision tasks in both the far field and the near field. To accomplish this, a novel image sensor system has been developed. Inspired by biological aspects of the visual systems found in both falcons and primates, a composite multi-camera sensor was constructed. The sensor provides an expandable visual range and excellent depth of field, and produces a single compact output image based on the log-polar retinal-cortical mapping that occurs in primates. This mapping provides scale- and rotation-tolerant processing, which in turn supports the mitigation of perspective distortion found in strictly Cartesian sensor systems. Furthermore, the scale-tolerant representation of objects moving on trajectories parallel to the sensor's optical axis allows fast acquisition and tracking of objects moving at high speed. To investigate how effective this combination would be for object detection and tracking in both the near and far field, the system was tuned for the application of vehicle detection and tracking from a moving platform. Finally, it was shown that license plate information could easily be captured autonomously by extracting information contained in the mapped log-polar representation space. The novel composite log-polar deep-field image sensor opens new horizons for computer vision. The current work demonstrates features that can benefit applications beyond high-speed vehicle tracking for driver assistance and license plate capture. Envisioned future applications include obstacle detection for high-speed trains, computer-assisted aircraft landing, and computer-assisted spacecraft docking
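The log-polar mapping central to the abstract above can be sketched in a few lines of NumPy. This is a generic illustration, not the dissertation's sensor implementation; the output resolution and nearest-neighbor sampling are arbitrary assumptions.

```python
import numpy as np

def log_polar_map(img, out_shape=(64, 128)):
    """Resample a 2D grayscale image into log-polar (retinal-cortical) form.

    Output rows index log-radius, columns index angle, so a scaling of the
    input becomes a vertical shift and a rotation becomes a horizontal
    shift -- the scale/rotation tolerance described in the abstract.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = np.hypot(cy, cx)
    n_r, n_theta = out_shape
    # log-spaced radii from 1 pixel out to the image corner (the single
    # center pixel, the "fovea", is not sampled by this sketch)
    radii = np.exp(np.linspace(0.0, np.log(r_max), n_r))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]
```

A bright point placed to the right of the image center lands in the angle-zero column of the output, at a row proportional to the log of its distance from the center.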

    Embed Me If You Can: A Geometric Perceptron

    Full text link
    Solving geometric tasks involving point clouds by using machine learning is a challenging problem. Standard feed-forward neural networks combine linear or, if the bias parameter is included, affine layers and activation functions. Their geometric modeling is limited, which motivated the prior work introducing the multilayer hypersphere perceptron (MLHP). Its constituent part, i.e., hypersphere neuron, is obtained by applying a conformal embedding of Euclidean space. By virtue of Clifford algebra, it can be implemented as the Cartesian dot product of inputs and weights. If the embedding is applied in a manner consistent with the dimensionality of the input space geometry, the decision surfaces of the model units become combinations of hyperspheres and make the decision-making process geometrically interpretable for humans. Our extension of the MLHP model, the multilayer geometric perceptron (MLGP), and its respective layer units, i.e., geometric neurons, are consistent with the 3D geometry and provide a geometric handle of the learned coefficients. In particular, the geometric neuron activations are isometric in 3D. When classifying the 3D Tetris shapes, we quantitatively show that our model requires no activation function in the hidden layers other than the embedding to outperform the vanilla multilayer perceptron. In the presence of noise in the data, our model is also superior to the MLHP
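The conformal embedding that turns a hypersphere decision surface into a plain dot product can be illustrated as follows. This is a minimal sketch of the general idea; the component ordering and sign conventions are assumptions, not necessarily those of the MLHP or MLGP papers.

```python
import numpy as np

def embed_point(x):
    # Conformal embedding of a Euclidean point x -> (x, -1, -|x|^2 / 2).
    return np.concatenate([x, [-1.0, -0.5 * np.dot(x, x)]])

def embed_sphere(c, r):
    # Embedding of a hypersphere with center c and radius r.
    return np.concatenate([c, [0.5 * (np.dot(c, c) - r * r), 1.0]])

def hypersphere_activation(x, c, r):
    # Cartesian dot product of embedded input and embedded sphere weights;
    # equals -(1/2)(|x - c|^2 - r^2): > 0 inside, 0 on, < 0 outside.
    return float(np.dot(embed_point(x), embed_sphere(c, r)))
```

The sign of the activation reports whether the input lies inside, on, or outside the sphere, while the computation remains a linear operation in the embedded space.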

    Augment Features Beyond Color for Domain Generalized Segmentation

    Full text link
    Domain generalized semantic segmentation (DGSS) is an essential but highly challenging task, in which the model is trained only on source data and any target data is not available. Previous DGSS methods can be partitioned into augmentation-based and normalization-based ones. The former either introduces extra biased data or only conducts channel-wise adjustments for data augmentation, and the latter may discard beneficial visual information, both of which lead to limited performance in DGSS. Contrarily, our method performs inter-channel transformation and meanwhile evades domain-specific biases, thus diversifying data and enhancing model generalization performance. Specifically, our method consists of two modules: random image color augmentation (RICA) and random feature distribution augmentation (RFDA). RICA converts images from RGB to the CIELAB color model and randomizes color maps in a perception-based way for image enhancement purposes. We further this augmentation by extending it beyond color to feature space using a CycleGAN-based generative network, which complements RICA and further boosts generalization capability. We conduct extensive experiments, and the generalization results from the synthetic GTAV and SYNTHIA to the real Cityscapes, BDDS, and Mapillary datasets show that our method achieves state-of-the-art performance in DGSS
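The color-space randomization idea behind RICA can be sketched as below. This is a hypothetical illustration operating on an image already converted to CIELAB (e.g. with `skimage.color.rgb2lab`); the jitter magnitudes, and the choice to perturb the chromatic channels more strongly than luminance, are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def rica_augment(lab_img, rng, l_jitter=5.0, ab_jitter=10.0):
    """Randomize the color maps of an H x W x 3 image in CIELAB space.

    L* (channel 0) is shifted mildly; a*/b* (channels 1-2) are scaled and
    shifted more strongly, so augmented images stay perceptually plausible
    while their color statistics drift away from the source domain.
    """
    out = lab_img.astype(np.float64).copy()
    out[..., 0] += rng.uniform(-l_jitter, l_jitter)         # shift L*
    out[..., 1:] *= rng.uniform(0.8, 1.2, size=2)           # scale a*, b*
    out[..., 1:] += rng.uniform(-ab_jitter, ab_jitter, 2)   # shift a*, b*
    out[..., 0] = np.clip(out[..., 0], 0.0, 100.0)          # valid L* range
    return out
```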

    TetraSphere: A Neural Descriptor for O(3)-Invariant Point Cloud Analysis

    Full text link
    Rotation invariance is an important requirement for the analysis of 3D point clouds. In this paper, we present a learnable descriptor for rotation- and reflection-invariant 3D point cloud analysis based on recently introduced steerable 3D spherical neurons and vector neurons. Specifically, we show the compatibility of the two approaches and apply steerable neurons in an end-to-end method, which both constitute the technical novelty. In our approach, we perform TetraTransform -- which lifts the 3D input to an equivariant 4D representation, constructed by the steerable neurons -- and extract deeper rotation-equivariant features using vector neurons. This integration of the TetraTransform into the VN-DGCNN framework, termed TetraSphere, inexpensively increases the number of parameters by less than 0.0007%. Taking only points as input, TetraSphere sets a new state-of-the-art performance classifying randomly rotated real-world object scans of the hardest subset of ScanObjectNN, even when trained on data without additional rotation augmentation. Additionally, TetraSphere demonstrates the second-best performance segmenting parts of the synthetic ShapeNet, consistently outperforming the baseline VN-DGCNN. All in all, our results reveal the practical value of steerable 3D spherical neurons for learning in 3D Euclidean space
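The O(3) invariance claimed above (rotations and reflections alike) can be tested numerically with a toy descriptor. The histogram-of-pairwise-distances descriptor below is not TetraSphere, only a simple stand-in that exhibits the same invariance property; the bin count and range are arbitrary choices.

```python
import numpy as np

def o3_invariant_descriptor(points, bins=16):
    """A toy O(3)-invariant descriptor of an N x 3 point cloud: a histogram
    of pairwise distances, unchanged by any rotation or reflection."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    tri = d[np.triu_indices(len(points), k=1)]  # upper triangle, no diagonal
    hist, _ = np.histogram(tri, bins=bins, range=(0.0, 12.0))
    return hist

def random_o3_matrix(rng):
    # QR decomposition of a Gaussian matrix yields a random orthogonal
    # matrix (determinant +1 or -1, i.e. an element of O(3)).
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    return q * np.sign(np.diag(r))
```

Applying a random orthogonal transform to the cloud and recomputing the descriptor returns the same histogram, which is the property an O(3)-invariant network guarantees by construction.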

    Analysis of methods for regulating the size of silicon dioxide particles obtained by the Stober method

    Get PDF
    The object of research is the method of synthesis of silicon dioxide nanoparticles, namely the Stober method. Particle synthesis via the Stober process is an example of a sol-gel method, one of the most practical and controllable ways of obtaining nanoparticles of controlled size, shape, and morphology. The Stober method is a classical approach to the synthesis of silica nanoparticles, but existing works lack a systematic approach to establishing the connection between such reaction parameters as component concentrations, temperature, and process time. Various kinds of information retrieval and literature study were used during the research. As a result of this work, a review is obtained that addresses the problem of systematizing the influence of these parameters under the conditions of the Stober process. Methods for regulating the size of silica particles are considered, namely changes in: temperature over a fairly wide range from 5 °C to 65 °C; TEOS/H2O/NH3 concentrations; the amount and thermodynamic quality of the solvent; and the effect of reaction time. The influence of these parameters is considered not only in terms of varying a single parameter but also in combination with the others. The regularities of particle diameter variation for the main synthesis conditions are established. Routes for synthesizing particles by the Stober method from hundreds of nanometers to micrometers are shown. It is shown that synthesizing particles of minimal size requires reducing the concentrations of the reacting components TEOS, H2O, and NH3. This makes it possible to reduce the rates of the hydrolysis and condensation processes, as well as the solubility of the intermediate [Si(OC2H5)4-X(OH)X], which ensures the absence of supersaturation during nucleation. The determining factors for this reduction are an increased synthesis temperature and the use of more polar solvents. The results of the work can be used to control the synthesis of silicon dioxide nanoparticles for various applications, from catalytic systems to functional fillers of materials and, in particular, the creation of superhydrophobic structures

    Development of an error correction method using perfect binary arrays

    Get PDF
    The research focuses on an innovative error correction method that uses perfect binary arrays (PBAs), a powerful mathematical tool whose unique properties make it well suited to error correction. The research studies the impact of uncorrelated mixed-type errors in the data exchange path, which allows the method to be used in smart technologies with limited computing capabilities. The effectiveness of the approach is confirmed by simulation and comparison with other error correction methods. To further study the structural, cross-correlation, and distance properties of orthogonal two-dimensional codes and the correcting capabilities of the proposed method, an information technology system for data transmission based on an equivalence class of perfect binary arrays has been developed. The proposed model evaluates the performance of the PBA-based error correction code under various conditions, including correlated and uncorrelated interference in the data exchange path. A generator of PBAs of equivalence classes has been built. An experimental evaluation of the correcting ability of the proposed two-dimensional codes was carried out by simulating various error situations, including burst and random errors, for the cases of correlated and uncorrelated interference. Using a graphical interface, users can enter the number and type of errors, determine whether they are random or burst errors, move errors through the data packet manually or automatically, and view intermediate results. Thus, the comprehensive nature of this study positions it as a promising approach and a reliable choice in the field of error correction
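The defining property of a perfect binary array, zero two-dimensional periodic autocorrelation at every nonzero cyclic shift, can be verified directly. Below is a minimal NumPy sketch of that brute-force check, not the paper's generator or decoder.

```python
import numpy as np

def is_perfect_binary_array(a):
    """Check whether a +/-1 array is a perfect binary array: its 2D
    periodic autocorrelation must vanish for every nonzero cyclic shift."""
    a = np.asarray(a, dtype=int)
    m, n = a.shape
    for u in range(m):
        for v in range(n):
            if u == 0 and v == 0:
                continue  # the zero shift gives the array's energy, m*n
            shifted = np.roll(np.roll(a, u, axis=0), v, axis=1)
            if int((a * shifted).sum()) != 0:
                return False
    return True
```

The smallest nontrivial example is the 2x2 array [[1, 1], [1, -1]]: every nonzero cyclic shift pairs agreeing and disagreeing entries in equal number, so each correlation sum is zero.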

    Sliding friction in a gear pair

    Get PDF
    To solve the problem of improving the performance of machine gear drives, the authors propose a method for determining sliding friction losses in a gear pair. Analytical dependences were obtained that allow power losses due to sliding friction to be determined as functions of kinematic, force, and strength factors, and of the properties of the lubricants and gear materials

    Analysis of Clonal Type-Specific Antibody Reactions in Toxoplasma gondii Seropositive Humans from Germany by Peptide-Microarray

    Get PDF
    BACKGROUND: Different clonal types of Toxoplasma gondii are thought to be associated with distinct clinical manifestations of infections. Serotyping is a novel technique which may make it possible to determine the clonal type of T. gondii that humans are infected with and to extend typing studies to larger populations which include infected but non-diseased individuals. METHODOLOGY: A peptide-microarray test for T. gondii serotyping was established with 54 previously published synthetic peptides, which mimic clonal type-specific epitopes. The test was applied to human sera (n = 174) collected from individuals with an acute T. gondii infection (n = 21), a latent T. gondii infection (n = 53) and from T. gondii-seropositive forest workers (n = 100). FINDINGS: The majority (n = 124; 71%) of all T. gondii seropositive human sera showed reactions against synthetic peptides with sequences specific for clonal type II (type II peptides). Type I and type III peptides were recognized by 42% (n = 73) or 16% (n = 28) of the human sera, respectively, while type II-III, type I-III or type I-II peptides were recognized by 49% (n = 85), 36% (n = 62) or 14% (n = 25) of the sera, respectively. The highest reaction intensities were observed with synthetic peptides mimicking type II-specific epitopes. A proportion of the sera (n = 22; 13%) showed no reaction with type-specific peptides. Individuals with acute toxoplasmosis reacted with a statistically significantly higher number of peptides than individuals with latent T. gondii infection or seropositive forest workers. CONCLUSIONS: Type II-specific reactions were overrepresented and higher in intensity in the study population, which was in accord with genotyping studies on T. gondii oocysts previously conducted in the same area. There were also individuals with type I- or type III-specific reactions. 
Well-characterized reference sera and further specific peptide markers are needed to establish and to perform future serotyping approaches with higher resolution